Conversation


@skcussm skcussm commented Nov 7, 2025

- Added a Pseudo FIM feature which provides inline completion with models not specifically designed for FIM. This allows for inline completions with models such as gpt-oss-20b/120b.
- Added a Pseudo FIM system prompt preference, used when Pseudo FIM is enabled in the model preferences.
- This feature works well with LM Studio/OpenAI and Ollama. Implemented, but untested, for Mistral.
- When Pseudo FIM is not enabled, the behavior should be identical to the original FIM implementation (requiring a FIM-enabled model).

Notes: Interestingly, Pseudo FIM mode still works well with the qwen2.5-coder models. It works reasonably well with meta-llama-3.1-8b-instruct, gemma-3-12b. It works very well with gpt-oss-20b.

Future possibilities: This new feature uses a system prompt to instruct the model to behave like a FIM model. Future work could automatically supply more context alongside this system prompt (or the user prompt) so the Pseudo FIM model can consider fuller context in its completions.
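To illustrate the idea described above, here is a minimal sketch of how a chat model can be made to emulate FIM via a system prompt. All class, method, and marker names here are hypothetical, chosen for illustration; they are not the PR's actual code.

```java
// Sketch: emulate fill-in-the-middle with a general chat model by sending
// a system prompt plus a user message containing a cursor marker.
// Names (PseudoFimPrompt, <CURSOR>) are illustrative, not the plugin's API.
public class PseudoFimPrompt {

    // System prompt instructing a general chat model to act as a FIM engine.
    static final String SYSTEM_PROMPT =
            "You are a code completion engine. The user sends code containing "
            + "a <CURSOR> marker. Reply with only the code that belongs at the "
            + "marker. No explanations, no markdown fences.";

    // Combine the text before and after the cursor into one user message.
    static String buildUserMessage(String prefix, String suffix) {
        return prefix + "<CURSOR>" + suffix;
    }

    public static void main(String[] args) {
        String userMessage = buildUserMessage("int x = ", ";\nreturn x;");
        System.out.println(SYSTEM_PROMPT);
        System.out.println(userMessage);
    }
}
```

The model's reply is then inserted verbatim at the cursor, which is why the system prompt must forbid explanations and fences.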

Please test with Mistral as I am not set up to do that.

I tried to follow the existing code's style and approach. Please feel free to improve this contribution.


hetzge commented Nov 8, 2025

Wow, thank you very much for the contribution.

I'm currently traveling, but I'll take a deeper look at it as soon as possible.

}

private static String getPseduoFIMSystemPrompt(String fimSystemPrompt, boolean multilineEnabled) {
return fimSystemPrompt + (multilineEnabled?"":"\\nOnly generate a single line of code.");
hetzge (Owner):

"\n"

skcussm (Author):

What do you mean? If you are talking about using the stops: I tried that, but was getting bad results. The main model I was trying to get this working with was gpt-oss-20b, and I think when using the stops it might have been stopped too early while 'reasoning', before it generated the response that was used (that, or specifying those stops overrode its default stops, causing other issues).

I'm not sure how to address this.
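One alternative to server-side stop sequences, sketched here as an assumption rather than the plugin's actual approach, is to truncate the completion on the client after it is received. This avoids interfering with a reasoning model's own internal stop tokens:

```java
// Sketch: enforce single-line completions client-side instead of via the
// request's stop sequences. Names here are illustrative.
public class SingleLineTruncate {

    // Keep only the first line of a completion when multiline is disabled.
    static String truncateToSingleLine(String completion, boolean multilineEnabled) {
        if (multilineEnabled) {
            return completion;
        }
        int newlineIndex = completion.indexOf('\n');
        return newlineIndex >= 0 ? completion.substring(0, newlineIndex) : completion;
    }

    public static void main(String[] args) {
        System.out.println(truncateToSingleLine("foo();\nbar();", false)); // prints foo();
    }
}
```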

}

private static String getPseduoFIMSystemPrompt(String fimSystemPrompt, boolean multilineEnabled) {
return fimSystemPrompt + (multilineEnabled?"":"\\nOnly generate a single line of code.");
hetzge (Owner):

It could be nicer to do this in the prompt's Jinja template (more flexibility, better integration in the prompt)

skcussm (Author):

I'm not sure I follow. Let the user control the single/multi-line instruction through the user-editable prompt instead of with the multiline preference?
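(If the suggestion is to move the instruction into the template, it might look something like the following Jinja fragment. Variable names here are assumptions, not the plugin's actual template variables:)

```jinja
{{ fim_system_prompt }}
{% if not multiline_enabled %}
Only generate a single line of code.
{% endif %}
```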


hetzge commented Nov 14, 2025

I added a few comments. You can improve on these points if you want (and have time); otherwise I would merge the pull request and improve it on my side.

I tested with mistral and it works fine.

A very early version of this plugin already had such a feature, but I was unhappy with the completions (many formatting problems and duplication of existing code in the results). I think it is a little better now, and as an optional alternative it is a good option.

Thank you again for the contribution.


hetzge commented Nov 14, 2025

I also added my formatter configuration here: https://github.com/hetzge/eclipse-ai-coder/blob/master/formatter.xml

- Moved the Pseudo FIM setting to general preferences
- Removed the 'java 21' spec from the default Pseudo FIM template
- Removed the instruction to implement comments from preceding lines

skcussm commented Nov 21, 2025

I also added my formatter configuration here: https://github.com/hetzge/eclipse-ai-coder/blob/master/formatter.xml

Copy. I'll try to use this going forward for any code updates to this repo. There is an option in GitHub to ignore whitespace, which makes the actual changes much clearer even when the formatting differs slightly.

